
    Solar activity during the Holocene: the Hallstatt cycle and its consequence for grand minima and maxima

    Cosmogenic isotopes provide the only quantitative proxy for analyzing the long-term solar variability over a centennial timescale. While essential progress has been achieved in both measurements and modeling of the cosmogenic proxy, uncertainties still remain in the determination of the geomagnetic dipole moment evolution. Here we improve the reconstruction of solar activity over the past nine millennia using a multi-proxy approach. We used records of the 14C and 10Be cosmogenic isotopes, current numerical models of the isotope production and transport in Earth's atmosphere, and available geomagnetic field reconstructions, including a new reconstruction relying on an updated archeo-/paleointensity database. The obtained series were analyzed using the singular spectrum analysis (SSA) method to study the millennial-scale trends. A new reconstruction of the geomagnetic dipole field moment, GMAG.9k, is built for the last nine millennia. New reconstructions of solar activity covering the last nine millennia, quantified in sunspot numbers, are presented and analyzed. A conservative list of grand minima and maxima is provided. The primary components of the reconstructed solar activity, as determined using the SSA method, are different for the series based on 14C and 10Be. These primary components can only be ascribed to long-term changes in the terrestrial system and not to the Sun. They have been removed from the reconstructed series. In contrast, the secondary SSA components of the reconstructed solar activity are found to be dominated by a common ~2400-yr quasi-periodicity, the so-called Hallstatt cycle, in both the 14C and 10Be based series. This Hallstatt cycle thus appears to be related to solar activity. Finally, we show that the grand minima and maxima occurred intermittently over the studied period, with clustering near highs and lows of the Hallstatt cycle, respectively.
    Comment: In press in Astronomy & Astrophysics, doi: 10.1051/0004-6361/20152729
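
    As background only, a minimal Python sketch of singular spectrum analysis (SSA), the decomposition method the abstract refers to: embed the series in a lagged (Hankel) trajectory matrix, take an SVD, and reconstruct individual components by diagonal averaging. The series, sampling step, and window length below are hypothetical illustrations, not the authors' isotope data.

    import numpy as np

    def ssa_components(series, window, n_components):
        """Basic SSA: embed, decompose with SVD, reconstruct by diagonal averaging."""
        n = len(series)
        k = n - window + 1
        # Trajectory (Hankel) matrix: each column is a lagged window of the series.
        X = np.column_stack([series[i:i + window] for i in range(k)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        comps = []
        for j in range(n_components):
            Xj = s[j] * np.outer(U[:, j], Vt[j])          # rank-one elementary matrix
            # Hankelization: average anti-diagonals to recover a time series.
            comp = np.array([Xj[::-1].diagonal(i - window + 1).mean() for i in range(n)])
            comps.append(comp)
        return np.array(comps)

    # Hypothetical proxy-like series: slow trend + ~2400-yr oscillation + noise.
    t = np.arange(0.0, 9000.0, 10.0)                      # years, 10-yr sampling
    y = 2e-4 * t + np.sin(2 * np.pi * t / 2400.0) + 0.3 * np.random.randn(t.size)
    trend = ssa_components(y, window=300, n_components=1)[0]

    The leading SSA component captures the slow trend which, per the abstract, is attributed to the terrestrial system rather than the Sun and is therefore removed before interpreting the residual variability.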

    Relating L-Resilience and Wait-Freedom via Hitting Sets

    The condition of t-resilience stipulates that an n-process program is only obliged to make progress when at least n-t processes are correct. Put another way, the live sets, the collection of process sets such that progress is required if all the processes in one of these sets are correct, are all sets with at least n-t processes. We show that the ability of an arbitrary collection L of live sets to solve distributed tasks is tightly related to the minimum hitting set of L, a minimum-cardinality subset of processes that has a non-empty intersection with every live set. Thus, finding the computing power of L is NP-complete. For the special case of colorless tasks, which allow participating processes to adopt each other's input or output values, we use a simple simulation to show that a task can be solved L-resiliently if and only if it can be solved (h-1)-resiliently, where h is the size of the minimum hitting set of L. For general tasks, we characterize L-resilient solvability of tasks with respect to a limited notion of weak solvability: in every execution where all processes in some set in L are correct, outputs must be produced for every process in some (possibly different) participating set in L. Given a task T, we construct another task T_L such that T is solvable weakly L-resiliently if and only if T_L is solvable weakly wait-free.
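
    To make the hitting-set connection concrete, a hedged sketch (the live sets below are hypothetical, not from the paper): a brute-force minimum hitting set of a collection L, whose size h is what the (h-1)-resilience equivalence for colorless tasks refers to. The exponential search is consistent with the abstract's NP-completeness remark.

    from itertools import combinations

    def minimum_hitting_set(live_sets, processes):
        """Smallest set of processes intersecting every live set (brute force)."""
        for size in range(1, len(processes) + 1):
            for candidate in combinations(processes, size):
                chosen = set(candidate)
                if all(chosen & live_set for live_set in live_sets):
                    return chosen
        return set(processes)

    # Hypothetical live-set collection L over five processes.
    L = [{0, 1, 2}, {1, 3}, {2, 3, 4}]
    h = len(minimum_hitting_set(L, range(5)))
    # Per the abstract, a colorless task is solvable L-resiliently
    # iff it is solvable (h-1)-resiliently.
    print(h, h - 1)                                       # 2 1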

    Anonymous Asynchronous Systems: The Case of Failure Detectors

    Due to the multiplicity of loci of control, a main issue distributed systems have to cope with is the uncertainty on the system state created by the adversaries that are asynchrony, failures, dynamicity, mobility, etc. Considering message-passing systems, this paper addresses the uncertainty created by the net effect of three of these adversaries, namely asynchrony, failures, and anonymity. This means that, in addition to being asynchronous and crash-prone, the processes have no identity. Trivially, agreement problems (e.g., consensus) that cannot be solved in the presence of asynchrony and failures cannot be solved either when anonymity is added. The paper consequently proposes anonymous failure detectors to circumvent these impossibilities. It has several contributions. First, it presents three classes of failure detectors (denoted AP, AΩ and AΣ) and shows that they are the anonymous counterparts of the classes of perfect failure detectors, eventual leader failure detectors and quorum failure detectors, respectively. The class AΣ is new, and showing that it is the anonymous counterpart of the class Σ is not trivial. Then, the paper presents and proves correct a genuinely anonymous consensus algorithm based on the pair of anonymous failure detector classes (AΩ, AΣ) (“genuinely” means that not only do processes have no identity, but no process is aware of the total number of processes). This new algorithm is not a “straightforward extension” of an algorithm designed for non-anonymous systems. To benefit from AΣ, it uses a novel message exchange pattern where each phase of every round is made up of sub-rounds in which appropriate control information is exchanged. Finally, the paper discusses the notions of failure detector class hierarchy and weakest failure detector class for a given problem in the context of anonymous systems.
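
    As a hedged illustration only: the classical quorum failure detector class Σ, of which the paper's AΣ is stated to be the anonymous counterpart, requires that any two quorums ever output, at any processes and any times, intersect. The small checker below tests that intersection property on a hypothetical trace; it names processes by identity, which an anonymous AΣ output cannot do, so it illustrates the non-anonymous property only and is not the paper's construction.

    from itertools import combinations

    def sigma_intersection_holds(quorum_outputs):
        """Safety property of a Sigma-style quorum failure detector:
        any two quorums ever output (at any process, any time) intersect."""
        return all(q1 & q2 for q1, q2 in combinations(quorum_outputs, 2))

    # Hypothetical trace of quorum outputs collected across processes and times.
    trace = [frozenset({1, 2, 3}), frozenset({2, 4}), frozenset({2, 5})]
    print(sigma_intersection_holds(trace))                # True: every pair shares process 2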

    Probing active forces via a fluctuation-dissipation relation: Application to living cells

    We derive a new fluctuation-dissipation relation for non-equilibrium systems with long-term memory. We show how this relation gives access to experimental information about the active forces in living cells that cannot be obtained otherwise. For a silica bead attached to the wall of a living cell, we identify a crossover time between thermally controlled fluctuations and those produced by the active forces. We show that the probe position is eventually slaved to the underlying random drive produced by the so-called active forces.
    Comment: 5 pages
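
    For orientation only, not the paper's new relation (which the abstract does not state): the equilibrium fluctuation-dissipation theorem that such generalizations start from, and the common effective-temperature measure of its violation, written in LaTeX with $R$ the response function and $C$ the position autocorrelation.

    % Equilibrium fluctuation-dissipation theorem (background form), for t > t':
    R(t - t') \;=\; \frac{1}{k_B T}\,\frac{\partial C(t,t')}{\partial t'},
    \qquad C(t,t') = \langle x(t)\, x(t') \rangle .
    % A standard way to quantify non-equilibrium violations (e.g. by active forces)
    % is an effective temperature defined through the same ratio:
    \frac{1}{k_B T_{\mathrm{eff}}(t,t')} \;\equiv\; \frac{R(t - t')}{\partial C(t,t')/\partial t'} .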

    History of degenerative spondylolisthesis: From anatomical description to surgical management

    This review of the historical medical literature aimed to trace the evolution of the surgical management of degenerative spondylolisthesis over time. The Medic@, IndexCat and Gallica historical databases and the PubMed and Embase medical databases were searched with several search terms, covering the years 1700-2018. Data from anatomical, biomechanical, pathophysiological and surgical studies were compiled. In total, 150 documents were obtained, dating from 1782 to 2018: 139 from PubMed, 1 from Medic@, 7 from IndexCat, and 3 from Gallica. The review thus ranges from (1) the description of the first clinical cases by several authors in Europe (1782), through (2) the identification of a distinct entity by MacNab (1963), to (3) surgical management by the emerging discipline of minimally invasive spine surgery and its subsequent evolution up to the present day. Spondylolisthesis is a frequent condition potentially responsible for a variety of functional impairments. Understanding and surgical management have progressed since the 20th century. Historically, the first descriptions of treatments concerned only spondylolisthesis associated with spondylolysis, especially in young adults. More recently, there has been progress in the understanding of the disease in elderly people, with the recognition of degenerative spondylolisthesis. New technologies and surgical techniques, aided by advances in supportive care, now provide spine surgeons with powerful treatment tools. Better knowledge of the evolution of surgery throughout history should enable better understanding of current approaches and concepts for treating degenerative spondylolisthesis.

    Toward the development of 3-dimensional virtual-reality video tutorials in the French neurosurgical residency program

    BACKGROUND: The present study developed 3D video tutorials with commentaries, using virtual-reality headsets (VRH). VRHs allow 3D visualization of complex anatomy from the surgeon's point of view. Students can view the surgery repeatedly without missing the essential steps, while simultaneously receiving advice from a group of experts in the field. METHODS: A single-center prospective study assessed surgical teaching using 3D video tutorials designed for French neurosurgery and ENT residents participating in the neuro-otology lateral skull-base workshop of the French College of Neurosurgery. At the end of the session, students filled out an evaluation form with a 5-point Likert scale to assess the teaching and the positive and negative points of this teaching tool. RESULTS: 22 residents in neurosurgery (n=17, 81.0%) and ENT (n=5) were included. 18 (81.8%) felt that the 3D video enhanced their understanding of the surgical approach. 15 (68.2%) thought the video provided good 3D visualization of anatomical structures, and 20 (90.9%) that it enabled better understanding of anatomical relationships. Most students had positive feelings about ease of use and their experience of the 3D video tutorial (n=14, 63.6%). 20 (90.9%) enjoyed using the video. 12 (54.5%) considered that the cadaver dissection workshop was more instructive. CONCLUSIONS: 3D video via a virtual-reality headset is an innovative teaching tool, approved by the students themselves. A future study should evaluate its long-term contribution, so as to determine its role in specialized neurosurgery and ENT diploma courses.

    Solving atomic multicast when groups crash

    In this paper, we study the atomic multicast problem, a fundamental abstraction for building fault-tolerant systems. In the atomic multicast problem, the system is divided into non-empty and disjoint groups of processes. Multicast messages may be addressed to any subset of groups, each message possibly being multicast to a different subset. Several papers previously studied this problem either in local area networks [3, 9, 20] or wide area networks [13, 21]. However, none of them considered atomic multicast when groups may crash. We present two atomic multicast algorithms that tolerate the crash of groups. The first algorithm tolerates an arbitrary number of failures, is genuine (i.e., to deliver a message m, only addressees of m are involved in the protocol), and uses the perfect failure detector P. We show that among realistic failure detectors, i.e., those that do not predict the future, P is necessary to solve genuine atomic multicast if we do not bound the number of processes that may fail. Thus, P is the weakest realistic failure detector for solving genuine atomic multicast when an arbitrary number of processes may crash. Our second algorithm is non-genuine and less resilient to process failures than the first algorithm, but has several advantages: (i) it requires perfect failure detection within groups only, and not across the system, (ii) as we show in the paper, it can be modified to rely on unreliable failure detection at the cost of a weaker liveness guarantee, and (iii) it is fast: messages addressed to multiple groups may be delivered within only two inter-group message delays.
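
    As a hedged aid to the genuine/non-genuine distinction only (this is not either of the paper's algorithms and ignores failures and failure detectors entirely): a failure-free sketch in which a single global sequencer totally orders every message regardless of its addressees, so ordering work is done outside the addressee groups, which is exactly what a genuine protocol forbids.

    from collections import defaultdict

    class GlobalSequencer:
        """Failure-free sketch of a NON-genuine atomic multicast: every message,
        whatever groups it is addressed to, is ordered by one global sequencer."""
        def __init__(self):
            self.seq = 0
            self.delivered = defaultdict(list)     # group -> messages in delivery order

        def multicast(self, msg, groups):
            self.seq += 1                          # single global total order
            for g in groups:                       # deliver only to addressee groups
                self.delivered[g].append((self.seq, msg))
            return self.seq

    seq = GlobalSequencer()
    seq.multicast("m1", {"G1", "G2"})
    seq.multicast("m2", {"G2"})
    print(seq.delivered["G2"])                     # [(1, 'm1'), (2, 'm2')]

    Delivery within each group follows the single global sequence, giving a uniform total order at the price of involving non-addressees in every ordering decision.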

    Charge distribution in two-dimensional electrostatics

    We examine the stability of ringlike configurations of N charges on a plane interacting through the potential $V(z_1,\ldots,z_N)=\sum_i |z_i|^2-\sum_{i<j}\ln|z_i-z_j|^2$. We interpret the equilibrium distributions in terms of a shell model and compare predictions of the model with the results of numerical simulations for systems with up to 100 particles.
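
    A hedged numerical sketch of the kind of simulation the abstract mentions (the optimizer, particle count, and seed are illustrative choices, not the authors'): minimize the stated potential for a small N and inspect the radial distribution, where clusters of near-equal radii indicate ring/shell structure.

    import numpy as np
    from scipy.optimize import minimize

    def potential(flat_xy, n):
        """V = sum_i |z_i|^2 - sum_{i<j} ln|z_i - z_j|^2 for n points in the plane."""
        z = flat_xy.reshape(n, 2)
        confinement = np.sum(z ** 2)
        i, j = np.triu_indices(n, k=1)
        d2 = np.sum((z[i] - z[j]) ** 2, axis=1)    # squared pairwise distances
        return confinement - np.sum(np.log(d2))

    n = 12                                         # small hypothetical system
    rng = np.random.default_rng(0)
    start = rng.normal(size=2 * n)                 # random initial positions
    res = minimize(potential, start, args=(n,), method="L-BFGS-B")
    radii = np.sort(np.linalg.norm(res.x.reshape(n, 2), axis=1))
    print(radii)                                   # clustered radii suggest shell structure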

    Classification in sparse, high dimensional environments applied to distributed systems failure prediction

    Network failures are still one of the main causes of distributed systems’ lack of reliability. To overcome this problem, we present an improvement over a failure prediction system based on Elastic Net Logistic Regression and rare-event prediction techniques, able to work with sparse, high-dimensional datasets. Specifically, we prove its stability, fine-tune its hyperparameter, and improve its industrial utility by showing that, with a slight change in dataset creation, it can also predict the location of a failure, a key asset when taking a proactive approach to failure management.
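
    A minimal sketch of the modelling ingredients named in the abstract, elastic-net-penalized logistic regression on sparse, high-dimensional data with re-weighting for rare (failure) events, using scikit-learn on synthetic data; the dataset shape, class ratio, and hyperparameter values are hypothetical, not the paper's.

    import numpy as np
    from scipy.sparse import random as sparse_random
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical sparse, high-dimensional features with rare positive (failure) labels.
    rng = np.random.default_rng(0)
    X = sparse_random(5000, 2000, density=0.01, format="csr", random_state=0)
    y = (rng.random(5000) < 0.03).astype(int)      # ~3% failure events

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Elastic-net penalty (L1/L2 mix) with class re-weighting for the rare class.
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, C=1.0,
                             class_weight="balanced", max_iter=2000)
    clf.fit(X_tr, y_tr)
    print(clf.score(X_te, y_te))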

    Results from the LHC Beam Dump Reliability Run

    The LHC Beam Dumping System is one of the vital elements of the LHC Machine Protection System and has to operate reliably every time a beam dump request is made. Detailed dependability calculations have been made, resulting in expected rates for the different system failure modes. A 'reliability run' of the whole system, installed in its final configuration in the LHC, has been made to discover infant mortality problems and to compare the occurrence of the measured failure modes with the calculated expectations.
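
    Purely as an illustration of comparing measured failure-mode occurrences against predicted rates (the numbers below are invented and the paper's dependability methodology is not reproduced): a Poisson check of how surprising each observed count would be under its predicted mean.

    from scipy import stats

    # Hypothetical predicted failure-mode rates (events per run) and observed counts;
    # these are NOT the actual LHC beam dump figures.
    expected = {"false dump": 4.0, "missed trigger": 0.5, "retrigger": 1.0}
    observed = {"false dump": 6,   "missed trigger": 0,   "retrigger": 3}

    for mode, mu in expected.items():
        k = observed[mode]
        # Probability of seeing at least the observed count under a Poisson model
        # with the predicted mean: P[X >= k].
        p_hi = stats.poisson.sf(k - 1, mu)
        print(f"{mode}: observed {k}, expected {mu}, P[X >= k] = {p_hi:.3f}")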